The Crosslinguistic Relationship between Ordering Flexibility and Dependency Length Minimization: A Data-Driven Approach
This paper asks whether syntactic constructions with more flexible constituent orderings show a weaker tendency toward dependency length minimization (DLM). As test cases, I use verb phrases in which the head verb has one direct object noun phrase (NP) dependent and exactly one adpositional phrase (PP) dependent adjacent to each other on the same side (e.g., Kobe praised [NP his oldest daughter] [PP from the stands]). Data from multilingual corpora of 36 languages show that, when all languages are combined, there is no consistent relationship between flexibility and DLM. When specific ordering domains are examined, more flexible constructions show, on average, a weaker preference for shorter dependencies mostly in preverbal contexts, while no correlation between the two holds in postverbal domains.
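As a concrete illustration of the DLM preference the paper tests, the sketch below compares total dependency length across the two possible orderings of a postverbal NP and PP. The phrase contents, the verb-at-position-zero indexing, and the head-is-final-word convention are simplifying assumptions for illustration, not the paper's exact corpus metric.

```python
# A minimal sketch, assuming word-count distances and a head-final
# convention for each phrase; the paper's corpus metric may differ.

def total_dep_length(constituents):
    """Sum of verb-to-head distances for one linear order.

    The verb is at position 0; each constituent's head is taken to be
    its final word (a simplification of real treebank head rules).
    """
    total, position = 0, 1  # the first postverbal word sits at distance 1
    for phrase in constituents:
        head_position = position + len(phrase) - 1
        total += head_position  # distance from the verb at position 0
        position += len(phrase)
    return total

np_dep = ["his", "oldest", "daughter"]       # direct object NP (3 words)
pp_dep = ["from", "the", "upper", "stands"]  # PP dependent (4 words)

print(total_dep_length([np_dep, pp_dep]))  # NP first: 3 + 7 = 10
print(total_dep_length([pp_dep, np_dep]))  # PP first: 4 + 7 = 11
```

With constituents of unequal length, the shorter-first order yields the smaller total, which is the postverbal pattern DLM predicts; when the two phrases are equally long, the totals tie and DLM makes no prediction either way.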
Dependency Length Minimization and Lexical Frequency in Prepositional Phrase Ordering in English
Previous research has shown cross-linguistically that the human language parser prefers constituent orders that minimize the distance between syntactic heads and their dependents, but how dependency length minimization (DLM) interacts with other factors governing linear word order is still poorly understood. We examine the effects of DLM, lexical frequency, and the traditional Manner before Place before Time (MPT) rule on the ordering of prepositional phrase (PP) adjuncts in English, using syntactically annotated corpora from different language genres. While MPT and DLM were consistently predictive of PP ordering in our analysis, the effect of lexical frequency information was sensitive to language genre.
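To make the competing predictors concrete, here is a hedged sketch of how a pair of postverbal PP adjuncts might be encoded for an ordering model. The semantic-class labels, feature names, and word-count proxy for dependency length are illustrative assumptions, not the paper's actual feature set.

```python
# Illustrative encoding of one candidate order of two postverbal PP
# adjuncts; class labels and feature names are assumptions.

MPT_RANK = {"manner": 0, "place": 1, "time": 2}

def ordering_features(first_pp, second_pp):
    """Each PP is a (semantic_class, word_count) tuple describing one
    adjunct in the candidate linear order."""
    (cls1, len1), (cls2, len2) = first_pp, second_pp
    return {
        # True when the candidate order respects Manner < Place < Time.
        "follows_mpt": MPT_RANK[cls1] <= MPT_RANK[cls2],
        # Positive when the shorter PP comes first, the postverbal order
        # that dependency length minimization favours.
        "dlm_advantage": len2 - len1,
    }

# "spoke [PP with care] [PP at the meeting yesterday]": manner (2 words)
# before time (4 words) satisfies MPT and puts the shorter PP first.
print(ordering_features(("manner", 2), ("time", 4)))
# -> {'follows_mpt': True, 'dlm_advantage': 2}
```

When the two features point in the same direction, as above, either factor alone would predict the attested order; the interesting corpus cases are those where they conflict.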
An update on genomic-guided therapies for pediatric solid tumors
Currently, of the 82 US FDA-approved targeted therapies for adult cancers, only three are approved for use in children irrespective of their genomic status. Apart from leukemia, only a handful of genomic-based trials involving children with solid tumors are ongoing. Emerging genomic data for pediatric solid tumors may facilitate the development of precision medicine for pediatric patients. Here, we provide an up-to-date review of all reported genomic aberrations in the eight most common pediatric solid tumors with whole-exome sequencing or whole-genome sequencing data (from the cBioPortal database, the Pediatric Cancer Genome Project, and Therapeutically Applicable Research to Generate Effective Treatments) and additional non-whole-exome sequencing studies. Potential druggable events are highlighted and discussed so as to facilitate preclinical and clinical research in this area. Funding: Seed Grant of the Strategic Research Theme for Cancer, The University of Hong Kong (AKSC). VWY Lui is funded by the Research Grants Council, Hong Kong (#17114814, #17121616, General Research Fund; T12–401/13-R, Theme-based Research Scheme), and the Start-up Fund, School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong. W Piao is funded by the Faculty Postdoctoral Fellowship Scheme, Faculty of Medicine, The Chinese University of Hong Kong.
SIGMORPHON 2021 Shared Task on Morphological Reinflection: Generalization Across Languages
This year's iteration of the SIGMORPHON Shared Task on morphological reinflection focuses on typological diversity and cross-lingual variation of morphosyntactic features. For the task, we enrich UniMorph with new data for 32 languages from 13 language families, most of them under-resourced: Kunwinjku, Classical Syriac, Arabic (Modern Standard, Egyptian, Gulf), Hebrew, Amharic, Aymara, Magahi, Braj, Kurdish (Central, Northern, Southern), Polish, Karelian, Livvi, Ludic, Veps, Võro, Evenki, Xibe, Tuvan, Sakha, Turkish, Indonesian, Kodi, Seneca, Asháninka, Yanesha, Chukchi, Itelmen, and Eibela. We evaluate six systems on the new data and conduct an extensive error analysis of the systems' predictions. Transformer-based models generally demonstrate superior performance across the majority of languages, achieving >90% accuracy on 65% of them. The languages on which systems yield low accuracy are mainly under-resourced, with limited amounts of data. Most errors made by the systems are due to allomorphy, honorificity, and form variation. In addition, we observe that systems especially struggle to inflect multiword lemmas. Some systems, RNN-based models in particular, also produce misspelled forms or fall into repetitive loops. Finally, we report a large drop in systems' performance on previously unseen lemmas.
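As a rough illustration of the evaluation behind these numbers, the sketch below computes exact-match accuracy over (lemma, tags, form) test triples, plus the same accuracy restricted to lemmas unseen in training, the condition on which the reported performance drop occurs. The field order and helper names are assumptions, not the shared task's official scoring script.

```python
# A sketch of exact-match scoring over (lemma, tags, gold_form) triples,
# with a breakdown for unseen lemmas. Field order is an assumption
# modelled loosely on UniMorph's tab-separated layout.

def exact_match_accuracy(gold, predicted):
    """Fraction of predicted inflected forms matching the gold forms."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold) if gold else 0.0

def unseen_lemma_accuracy(test_rows, predictions, train_lemmas):
    """Accuracy restricted to test items whose lemma never occurs in
    training; this is the condition behind the reported drop.

    test_rows: list of (lemma, tags, gold_form); predictions aligned by index.
    """
    pairs = [(gold, pred)
             for (lemma, _tags, gold), pred in zip(test_rows, predictions)
             if lemma not in train_lemmas]
    gold_forms = [g for g, _ in pairs]
    pred_forms = [p for _, p in pairs]
    return exact_match_accuracy(gold_forms, pred_forms)

rows = [("sing", "V;PST", "sang"), ("walk", "V;PST", "walked")]
preds = ["sang", "walkd"]  # the second prediction is misspelled
print(exact_match_accuracy([r[2] for r in rows], preds))          # 0.5
print(unseen_lemma_accuracy(rows, preds, train_lemmas={"sing"}))  # 0.0
```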